Draw triangle using OpenGL ES for iOS

I am not an OpenGL expert and am having difficulty drawing a triangle to the screen. I am modifying an old sample project from Apple (GLPaint) that I have ported to Swift. The project is a basic finger drawing application, and I would like to modify it so that I can draw shapes.
I am able to draw points (GL_POINTS), but not triangles (GL_TRIANGLES).
I found a similar post where a user was attempting to do the same as me, but it did not help:
Draw Square with OpenGL ES for iOS
Below is the code that I use to successfully draw points to the screen:
EAGLContext.setCurrent(context)
glBindFramebuffer(GL_FRAMEBUFFER.ui, viewFramebuffer)
let squareVertices: [GLfloat] = [
    500.0, 500.0,
    1000.0, 500.0,
    1000.0, 1000.0,
]
glDisable(GL_TEXTURE_2D.ui);
glColor4f(1.0, 0.0, 0.0, 1.0);
glEnableVertexAttribArray(ATTRIB_VERTEX.ui)
glVertexAttribPointer(ATTRIB_VERTEX.ui, 2, GL_FLOAT.ui, GL_FALSE.ub, 0, squareVertices)
glUseProgram(program[PROGRAM_POINT].id)
glDrawArrays(GL_POINTS.ui, 0, 3)
glBindRenderbuffer(GL_RENDERBUFFER.ui, viewRenderbuffer)
context.presentRenderbuffer(GL_RENDERBUFFER.l)
Below is the initialization code:
private func initGL() -> Bool {
    // Generate IDs for a framebuffer object and a color renderbuffer
    glGenFramebuffers(1, &viewFramebuffer)
    glGenRenderbuffers(1, &viewRenderbuffer)
    glBindFramebuffer(GL_FRAMEBUFFER.ui, viewFramebuffer)
    glBindRenderbuffer(GL_RENDERBUFFER.ui, viewRenderbuffer)
    // This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
    // allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
    context.renderbufferStorage(GL_RENDERBUFFER.l, from: (self.layer as! EAGLDrawable))
    glFramebufferRenderbuffer(GL_FRAMEBUFFER.ui, GL_COLOR_ATTACHMENT0.ui, GL_RENDERBUFFER.ui, viewRenderbuffer)
    glGetRenderbufferParameteriv(GL_RENDERBUFFER.ui, GL_RENDERBUFFER_WIDTH.ui, &backingWidth)
    glGetRenderbufferParameteriv(GL_RENDERBUFFER.ui, GL_RENDERBUFFER_HEIGHT.ui, &backingHeight)
    // For this sample, we do not need a depth buffer. If you do, this is how you can create one and attach it to the framebuffer:
    // glGenRenderbuffers(1, &depthRenderbuffer);
    // glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    // glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
    // glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
    if glCheckFramebufferStatus(GL_FRAMEBUFFER.ui) != GL_FRAMEBUFFER_COMPLETE.ui {
        NSLog("failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER.ui))
        return false
    }
    // Setup the view port in Pixels
    glViewport(0, 0, backingWidth, backingHeight)
    // Create a Vertex Buffer Object to hold our data
    glGenBuffers(1, &vboId)
    // Load the brush texture
    brushTexture = self.texture(fromName: "ParticleNew.png")
    // Load shaders
    self.setupShaders()
    // Enable blending and set a blending function appropriate for premultiplied alpha pixel data
    glEnable(GL_BLEND.ui)
    glBlendFunc(GL_ONE.ui, GL_ONE_MINUS_SRC_ALPHA.ui)
    return true
}
To draw a triangle, I tried changing glDrawArrays(GL_POINTS.ui, 0, 3) to glDrawArrays(GL_TRIANGLES.ui, 0, 3), but nothing was drawn to the screen. There were also no error messages printed to the console.
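For comparison, here is a minimal sketch of a triangle draw in plain C-style GL ES 2.0 calls (triangleProgram and positionAttrib are hypothetical names for a simple pass-through program, not the GLPaint point program). Without a projection matrix, only geometry inside clip space (roughly -1 to 1) is visible, unlike the pixel coordinates used above; also note that the point program's fragment shader may rely on gl_PointCoord, which is only defined for point sprites:
// Hedged sketch: assumes a basic color-only program whose position attribute
// is expected directly in clip space.
const GLfloat triangleVertices[] = {
    -0.5f, -0.5f,   // bottom left
     0.5f, -0.5f,   // bottom right
     0.0f,  0.5f,   // top
};
glUseProgram(triangleProgram);                        // hypothetical program
glEnableVertexAttribArray(positionAttrib);            // hypothetical attribute location
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, triangleVertices);
glDrawArrays(GL_TRIANGLES, 0, 3);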
Any help or direction would be greatly appreciated.

Related

glDrawArrays is bound to single texture

I have a number of textures loaded using GLKTextureLoader. If I bind any of the loaded textures statically, each texture works as expected.
However, when I try to bind a random texture on each glDrawArrays call, the texture that gets bound is always the same.
GLuint vbo = vboIDs[emitterNum];
GLKMatrix4 projectionMatrix = GLKMatrix4MakeScale(1.0f, aspectRatio, 1.0f);
glUseProgram(emitterShader[emitterNum].program);
glEnable(GL_TEXTURE_2D);
glActiveTexture (GL_TEXTURE0);
//Note: valid texture names are 0-31, but in my code I store texture names returned in an array and use them. Use arc4random here for simplicity
glBindTexture(GL_TEXTURE_2D, arc4random_uniform(31)); //use a random texture name
//glBindTexture(GL_TEXTURE_2D, 2); //If I use this line instead of the line above, it will draw texture 2, or any number I specify
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glUniformMatrix4fv(emitterShader[emitterNum].uProjectionMatrix, 1, 0, projectionMatrix.m);
//I set a number of uniforms such as:
glUniform1f(emitterShader[emitterNum].uTime, timeCurrentFrame);
glUniform1i(emitterShader[emitterNum].uTexture, 0);
//I set a number of vertex arrays such as:
glEnableVertexAttribArray(emitterShader[emitterNum].aShade);
glVertexAttribPointer(emitterShader[emitterNum].aShade, // Set pointer
4, // four components per particle (vec4)
GL_FLOAT, // Data is floating point type
GL_FALSE, // No fixed point scaling
sizeof(Particles), // No gaps in data
(void*)(offsetof(Particles, shade))); // Start from "shade" offset within bound buffer
GLsizei rowsToUse = emitters[emitterNum]->rows;
//Draw the arrays
glDrawArrays(GL_POINTS, 0, rowsToUse );
//Then clean up
glBindTexture(GL_TEXTURE_2D, 0);
glUseProgram(0);
glDisable(GL_TEXTURE_2D);
glBindBuffer(GL_ARRAY_BUFFER, 0);
I have tried putting the texture calls in various places, like where shown and directly before the glDrawArrays command, but no matter what, I can't make it bind to different textures unless I do so statically.
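For reference, a hedged sketch of picking a random texture from the names GLKTextureLoader actually returned (loadedTextures and TEXTURE_COUNT are hypothetical; the names the loader returns are not guaranteed to be the integers 0-31):
// Hypothetical bookkeeping, filled in elsewhere from GLKTextureLoader results,
// e.g. loadedTextures[i] = textureInfo.name;
GLuint loadedTextures[TEXTURE_COUNT];

// Pick one of the names that actually exists instead of an arbitrary 0-31 value.
GLuint randomName = loadedTextures[arc4random_uniform(TEXTURE_COUNT)];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, randomName);
glUniform1i(emitterShader[emitterNum].uTexture, 0);   // the sampler keeps reading unit 0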

OpenGL ES 2.0 iOS - draw a rectangle into stencil buffer and limit drawing only inside it

Do a good deed and help get someone (me) out of their misery, since it's New Year's Eve soon. I'm working on an iOS app, a coloring book for kids, and I haven't stumbled upon OpenGL before (more precisely, OpenGL ES 2.0), so there's a big chance there's stuff I don't actually get in my code.
One of the tasks is to not let the brush spill out of the contour in which the user started drawing.
After reading and understanding some OpenGL basics, I found that using the stencil buffer is the right solution. This is my stencil buffer setup:
glClearStencil(0);
//clear the stencil
glClear(GL_STENCIL_BUFFER_BIT);
//disable writing to color buffer
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
//disable depth buffer
glDisable(GL_DEPTH_TEST);
//enable writing to stencil buffer
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NEVER, 1, 0xFF);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
[self drawStencil];
//re-enable color buffer
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
//only draw where there is a 1
glStencilFunc(GL_EQUAL, 1, 1);
//keep the pixels in the stencil buffer
glStencilOp( GL_KEEP, GL_KEEP, GL_KEEP );
Right now, I'm just trying to draw a square in the stencil buffer and see if I can limit my drawing only to that square. This is the method drawing the square:
- (void)drawStencil
{
// Create a renderbuffer
GLuint renderbuffer;
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
// Create a framebuffer
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, renderbuffer);
// Clear
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT);
// Read vertex shader source
NSString *vertexShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"VertexShader" ofType:@"vsh"] encoding:NSUTF8StringEncoding error:nil];
const char *vertexShaderSourceCString = [vertexShaderSource cStringUsingEncoding:NSUTF8StringEncoding];
// Create and compile vertex shader
GLuint _vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(_vertexShader, 1, &vertexShaderSourceCString, NULL);
glCompileShader(_vertexShader);
// Read fragment shader source
NSString *fragmentShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"FragmentShader" ofType:@"fsh"] encoding:NSUTF8StringEncoding error:nil];
const char *fragmentShaderSourceCString = [fragmentShaderSource cStringUsingEncoding:NSUTF8StringEncoding];
// Create and compile fragment shader
GLuint _fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(_fragmentShader, 1, &fragmentShaderSourceCString, NULL);
glCompileShader(_fragmentShader);
// Create and link program
GLuint program = glCreateProgram();
glAttachShader(program, _vertexShader);
glAttachShader(program, _fragmentShader);
glLinkProgram(program);
// Use program
glUseProgram(program);
// Define geometry
GLfloat square[] = {
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
0.5, 0.5};
//Send geometry to vertex shader
const char *aPositionCString = [@"a_position" cStringUsingEncoding:NSUTF8StringEncoding];
GLuint aPosition = glGetAttribLocation(program, aPositionCString);
glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, 0, square);
glEnableVertexAttribArray(aPosition);
// Draw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present renderbuffer
[context presentRenderbuffer:GL_RENDERBUFFER];
}
So much code and nothing happens... I can draw relentlessly wherever I want without a single stencil test stopping me.
What can I do? How do I check if the stencil buffer has something drawn inside it? If there's a missing puzzle for any of you, I will happily share any other parts of the code.
Any help is greatly appreciated! This has been torturing me for a while now. I will be forever in your debt!
UPDATE
I got the contour thing to work but I didn't use the stencil buffer. I created masks for every drawing area and textures for each mask which I loaded in the fragment shader along with the brush texture. When I tap on an area, I iterate through the array of masks and see which one was selected and bind the mask texture. I will make another post on SO with a more appropriate title and explain it there.
The way you allocate the renderbuffer storage looks problematic:
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
The documentation says about this method:
The width, height, and internal color buffer format are derived from the characteristics of the drawable object.
The way I understand it, since your "drawable object" will normally be a color buffer, this will create a color renderbuffer. But you need a renderbuffer with stencil format in your case. I'm not sure if there's a way to do this with a utility method in the context class (the documentation says something about "overriding the internal color buffer format"), but the easiest way is probably to simply call the corresponding OpenGL function directly:
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);
If you want to use your own FBO for this rendering, you will also need to create a color buffer for it, and attach it to the FBO. Otherwise you're not really producing any rendering output.
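For illustration, a hedged sketch of such a standalone FBO with both a color renderbuffer and a stencil renderbuffer attached (width and height are assumed to be the drawable size in pixels; GL_RGBA8_OES comes from the OES_rgb8_rgba8 extension, and some iOS GPUs only accept stencil as part of a packed GL_DEPTH24_STENCIL8_OES renderbuffer, so always check completeness):
GLuint fbo, colorRb, stencilRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color attachment: without one there is nothing visible to render into.
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

// Stencil attachment: the buffer glStencilFunc/glStencilOp actually operate on.
glGenRenderbuffers(1, &stencilRb);
glBindRenderbuffer(GL_RENDERBUFFER, stencilRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer (e.g. fall back to a packed depth-stencil format)
}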
Instead of creating a new FBO, it might be easier to make sure that the default framebuffer has a stencil buffer, and render to it directly. To do this, you can request a stencil buffer for your GLKView derived view by making this call during setup:
[view setDrawableStencilFormat: GLKViewDrawableStencilFormat8];

How to emulate an accumulation buffer in OpenGL es 2.0 (Trailing Particles Effect)

So I have been trying to create a trailing particle effect (seen here) with OpenGL ES 2.0. Unfortunately, it appears that the OpenGL feature that makes this possible (the accumulation buffer) is not available in OpenGL ES, which means it will be necessary to go the LONG way.
This topic described a possible method to do such a thing. However I am quite confused about how to store things inside a buffer and combine buffers. So my thought was to do the following.
1. Draw the current frame into a texture, using a framebuffer that writes to that texture.
2. Draw the previous frames (but faded) into another buffer.
3. Put step 1 on top of step 2 and display that.
4. Save whatever is displayed for use next frame.
My understanding so far is that buffers store pixel data in the same way textures do, just that buffers can more easily be drawn to using shaders.
So the idea would probably be to render to a buffer THEN move it into a texture.
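For what it's worth, the usual way to make rendered output reusable is to attach a texture (not a renderbuffer) as the FBO's color attachment, since a renderbuffer cannot be sampled from a shader. A hedged sketch in plain C-style calls, assuming width and height are the drawable size:
GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &tex);

// The texture itself is the FBO's color attachment, so drawing into the FBO fills the texture.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// ... draw here; afterwards tex can be bound and sampled like any other texture ...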
One approach I found for doing this is the following:
In retrospect, you should create two FBOs (each with its own texture);
using the default framebuffer isn't reliable (the contents aren't
guaranteed to be preserved between frames).
After binding the first FBO, clear it then render the scene normally.
Once the scene has been rendered, use the texture as a source and
render it to the second FBO with blending (the second FBO is never
cleared). This will result in the second FBO containing a mix of the
new scene and what was there before. Finally, the second FBO should be
rendered directly to the window (this can be done by rendering a
textured quad, similarly to the previous operation, or by using
glBlitFramebuffer).
Essentially, the first FBO takes the place of the default framebuffer
while the second FBO takes the place of the accumulation buffer.
In summary:
Initialisation:
For each FBO:
- glGenTextures
- glBindTexture
- glTexImage2D
- glBindFrameBuffer
- glFramebufferTexture2D
Each frame:
glBindFrameBuffer(GL_DRAW_FRAMEBUFFER, fbo1)
glClear
glDraw* // scene
glBindFrameBuffer(GL_DRAW_FRAMEBUFFER, fbo2)
glBindTexture(tex1)
glEnable(GL_BLEND)
glBlendFunc
glDraw* // full-screen quad
glBindFrameBuffer(GL_DRAW_FRAMEBUFFER, 0)
glBindFrameBuffer(GL_READ_FRAMEBUFFER, fbo2)
glBlitFramebuffer
Unfortunately, it didn't have quite enough code (especially for initialization) to get me started.
But I have tried, and so far all I have gotten is a disappointing blank screen. I don't really know what I am doing, so this code is probably quite wrong.
var fbo1:GLuint = 0
var fbo2:GLuint = 0
var tex1:GLuint = 0
Init()
{
//...Loading shaders OpenGL etc.
//FBO 1
glGenFramebuffers(1, &fbo1)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo1)
//Create texture for shader output
glGenTextures(1, &tex1)
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)
//FBO 2
glGenFramebuffers(1, &fbo2)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo2)
//Create texture for shader output
glGenTextures(1, &tex1)
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)
}
func drawFullScreenTex()
{
glUseProgram(texShader)
let rect:[GLint] = [0, 0, GLint(width), GLint(height)]
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
//Texture is allready
glTexParameteriv(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_CROP_RECT_OES), rect)
glDrawTexiOES(0, 0, 0, width, height)
}
func draw()
{
//Prep
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo1)
glClearColor(0, 0.1, 0, 1.0)
glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
//1
glUseProgram(pointShader);
passTheStuff() //Just passes in uniforms
drawParticles(glGetUniformLocation(pointShader, "color"), size_loc: glGetUniformLocation(pointShader, "pointSize")) //Draws particles
//2
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo2)
drawFullScreenTex()
//3
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), 0)
glBindFramebuffer(GLenum(GL_READ_FRAMEBUFFER), fbo2)
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_NEAREST))
}
BTW here are some sources I found useful.
Site 1
Site 2
Site 3
Site 4
My main question is: Could someone please write out the code for this. I think I understand the theory involved, but I have spent so much time trying in vain to apply it.
If you want a place to start, I have the Xcode project that draws dots (and has a blue one that moves across the screen periodically) here; the code that isn't working is in there as well.
Note: if you are going to write code, you can use any language (C++, Java, Swift, Objective-C) and it will be perfectly fine, as long as it is for OpenGL ES.
You call glGenTextures(1, &tex1) twice with the same variable tex1. This overwrites the variable. When you later call glBindTexture(GLenum(GL_TEXTURE_2D), tex1), it does not bind the texture corresponding to fbo1, but rather that of fbo2. You need a different texture for every fbo.
As for a reference, below is a sample from a working program of mine which uses multiple FBOs and renders to texture.
GLuint fbo[n];
GLuint tex[n];

init() {
    glGenFramebuffers(n, fbo);
    glGenTextures(n, tex);
    for (int i = 0; i < n; ++i) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
    }
}

render() {
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // Draw scene into buffer 0

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, tex[0]);
    // Draw full screen tex
    ...

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, tex[n - 1]);
    // Draw to screen
    return;
}
A few notes. In order to get it to work I had to add the texture parameters.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
This is because on my system they defaulted to GL_NEAREST_MIPMAP_LINEAR. This did not work for the FBO texture, as no mipmap was generated. Set these to anything you like.
Also, make sure you have textures enabled with
glEnable(GL_TEXTURE_2D)
I hope this will help.
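As a hedged aside, one common way to get the "faded" trail the question describes is to composite each new frame over the accumulation FBO with constant-alpha blending instead of clearing it, so older frames decay gradually (fbo/tex as in the sample above; the 0.85 weight is an arbitrary example):
// Possible fade pass, reusing the fbo/tex arrays from the sample above.
glBindFramebuffer(GL_FRAMEBUFFER, fbo[1]);            // note: no glClear here
glEnable(GL_BLEND);
glBlendColor(0.0f, 0.0f, 0.0f, 0.85f);                // 0.85 = weight of the new frame
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBindTexture(GL_TEXTURE_2D, tex[0]);
// ... draw the full-screen textured quad ...
glDisable(GL_BLEND);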

How do I avoid a logical buffer store with GLKView's framebuffer?

Running Xcode's OpenGL ES diagnostic on a very simple app that switches to a second framebuffer and back (with appropriate use of glClear and glInvalidateFramebuffer) shows warnings about a logical buffer store on switching to the second framebuffer:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
// At this point, GLKView's framebuffer is bound
// Clear (to avoid logical buffer load)
glClear(GL_COLOR_BUFFER_BIT);
// Invalidate (to avoid logical buffer store)
glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, (GLenum[]){ GL_COLOR_ATTACHMENT0 });
// Switch to our own framebuffer, and attach a texture as the color attachment
// At this point, Xcode's OpenGL ES tool warns:
// "For best performance keep logical buffer store operations to a minimum."
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);
// Clear (to avoid logical buffer load)
glClear(GL_COLOR_BUFFER_BIT);
// Invalidate (to avoid logical buffer store)
glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, (GLenum[]){ GL_COLOR_ATTACHMENT0 });
// Might want to switch back to GLKView's drawable here, and do more rendering
}
Anyone have any ideas about why the invalidate's not taking hold? Note that in this example, the GLKView only has a color buffer attachment:
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
view.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
view.drawableDepthFormat = GLKViewDrawableDepthFormatNone;
view.drawableMultisample = GLKViewDrawableMultisampleNone;
Test app demonstrating this at https://dl.dropboxusercontent.com/u/6956432/test.zip
Cheers!

Using masks in OpenGL ES 2.0 for iOS apps

I have an app where I want the user to draw only in one certain area of the screen. For this purpose I use a mask image which is black in the drawable area and transparent in the non-drawable area, so the user can draw only on the part of the screen inside the mask and inside the black area of the mask.
I've tried to implement it via stencil buffer and modified some code from GLPaint sample project: http://pastebin.com/94MBr1Su
However, I still don't get the idea of how stencil buffers are used. Can anyone please help me with code examples of stencil buffers for my issue? Also, is there any way to implement this without stencil buffers?
Because your mask is a texture, the stencil buffer is not a good idea:
- during mask rendering, you must use "discard;" for transparent pixels in your fragment shader
- say welcome to antialiasing problems
For your curiosity, here some code to configure a mask with stencil buffer:
const bool invert_mask = false; // allow to draw inside or outside mask
unsigned mask_id = 1; // you can use this code multiple time without clearing stencil, just increment mask_id
glEnable(GL_STENCIL_TEST);
// write on stencil_mask
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilFunc(GL_ALWAYS, mask_id, 0);
// remove depth test and color writing
glDepthMask(GL_FALSE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// TODO: draw geometry of mask here (if you use a texture, don't forget to use discard in the shader)
// re-enable depth & color writing
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
// no stencil write
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
// test stencil value
glStencilFunc(invert_mask ? GL_NOTEQUAL : GL_EQUAL, mask_id, 0xff);
// TODO: draw "clipped" geometry here
// finally, remove stencil test
glDisable(GL_STENCIL_TEST);
The simplest way is to NOT USE STENCIL AT ALL. Create a grayscale screen-size texture, write your mask inside. Then bind it in your fragment shader:
uniform LOW_P sampler2D u_diffuse_sampler;
uniform LOW_P sampler2D u_mask_sampler;
varying mediump vec2 v_texcoord;
void main(void) {
gl_FragColor = texture2D(u_diffuse_sampler, v_texcoord) * texture2D(u_mask_sampler, v_texcoord).r;
}
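For completeness, a hedged sketch of the matching CPU-side setup: the drawn texture goes on unit 0, the mask on unit 1, and the two sampler uniforms point at those units (program, diffuseTex and maskTex are hypothetical names):
glUseProgram(program);

// Unit 0: the texture being drawn.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTex);
glUniform1i(glGetUniformLocation(program, "u_diffuse_sampler"), 0);

// Unit 1: the grayscale mask; the shader multiplies by its red channel,
// so white areas of the mask stay drawable and black areas are masked out.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, maskTex);
glUniform1i(glGetUniformLocation(program, "u_mask_sampler"), 1);

// ... draw the geometry with v_texcoord covering the mask ...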
