OpenGL ES 2 - Drawing GL_POINTS directly vs indirectly - iOS

I am creating an iOS app for drawing / sketching and am currently running into a problem when I draw GL_POINTS indirectly to an FBO, which is then stamped onto a final FBO.
Here is the result when I draw the GL_POINTS DIRECTLY to an FBO.
And here is the result when I draw the points INDIRECTLY, by drawing to an FBO and then drawing that FBO onto another FBO.
As you can see, the indirect method doesn't blend quite right. I don't know whether the problem is that my blend mode is wrong or that there's a loss of precision when drawing indirectly.
Here is my algorithm:
I. Drawing the points to an offscreen FBO named drawingFramebuffer:
// pre-multiplied alpha
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBindFramebuffer(GL_FRAMEBUFFER, drawingFramebuffer);
// clear drawing FBO
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
...
// draw the points
glDrawArrays(GL_POINTS, 0, totalPoints);
In the fragment shader:
uniform sampler2D brushTexture;
uniform highp vec4 brushColor;
void main()
{
highp vec4 textureAlpha = texture2D(brushTexture, gl_PointCoord.xy);
gl_FragColor = vec4(brushColor.rgb * textureAlpha.a, textureAlpha.a);
}
II. Then, stamping the drawingFramebuffer onto the final framebuffer using a quad:
// draw the texture using pre-multiplied alpha
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBindFramebuffer(GL_FRAMEBUFFER, finalFramebuffer);
...
// draw the quad vertices using triangle strip
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
In the fragment shader:
uniform sampler2D texture;
varying highp vec2 textureCoord;
void main()
{
highp vec4 textureColor = texture2D(texture, textureCoord);
gl_FragColor = textureColor;
}
I'm utterly confused: how can drawing directly and indirectly yield different results when the blend modes are the same?
Thanks guys/girls for any help!
--- Edited to Add ---
After some calculations in Excel, I found that my blending is already correct, so I suspect the problem is a loss of precision when reading from the drawing FBO.

Okay, I've finally fixed it by:
Disregarding the RGB calculation when drawing the GL_POINTS. Bottom line: the RGB value is the culprit, so I'm only computing alpha when drawing the GL_POINTS (using the default pre-multiplied blending).
Applying the coloring when 'stamping' the drawing FBO, by passing the color value as a uniform and setting the fragment color to that color multiplied by the sampled alpha.
I think this is the method that Procreate and other drawing apps use, although now I have the problem of what happens when the color value varies per point (i.e. is not a uniform)...
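Roughly, the two passes now look like this (a minimal sketch assuming a single uniform brush color; identifiers are illustrative, not my exact code):
// Pass 1 fragment shader: draw GL_POINTS into drawingFramebuffer,
// accumulating coverage in the alpha channel only (RGB stays zero).
uniform sampler2D brushTexture;
void main()
{
    highp float coverage = texture2D(brushTexture, gl_PointCoord).a;
    gl_FragColor = vec4(0.0, 0.0, 0.0, coverage);
}
// Pass 2 fragment shader: stamp drawingFramebuffer onto finalFramebuffer,
// applying the brush color here, pre-multiplied by the accumulated alpha.
uniform sampler2D texture;
uniform highp vec4 brushColor;
varying highp vec2 textureCoord;
void main()
{
    highp float coverage = texture2D(texture, textureCoord).a;
    gl_FragColor = vec4(brushColor.rgb * coverage, coverage);
}
Both passes keep glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).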

Related

Is it possible to invert the mask for GPUImageMaskFilter?

I am masking a photo with a frame mask image. The mask was created for Core Graphics which is inverted.
Black means "visible" and white means "fully transparent". The opposite of Photoshop masks.
Is it possible to invert the mask for the filter so the effect is reversed?
It is very easy to invert an RGB color. In a fragment shader, you can do this:
uniform sampler2D tex0;
uniform sampler2D tex1; // mask
varying vec2 uv;        // texture coordinate passed from the vertex shader
void main()
{
    vec4 nowcolor = texture2D(tex1, uv);
    vec4 newmask = vec4(1.0, 1.0, 1.0, 1.0) - nowcolor;
    // and use the new mask here instead of the old mask texture
}

Specifying the color for a WebGL cube in the fragment shader

I want to draw a single-colour cube in WebGL, and I want to specify the color within the fragment shader. I know that I can do that when drawing a square. To elaborate on my question: can I avoid using the color buffer in the way that is mentioned in this tutorial?
"MDN WebGL Tutorial"
If you really "want to specify the color within the fragment shader", this is your fragment shader:
precision mediump float;
void main(void) {
gl_FragColor = vec4 (0.0, 1.0, 0.0, 1.0);
}
This would give you green (values in the vec4() are r, g, b, a). Of course, in this case you can eliminate all the color stuff under the heading "Define the vertices' colors" in the tutorial.
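For completeness, a matching minimal vertex shader with the per-vertex color attribute removed might look like this (a sketch; the attribute and uniform names are illustrative and should match whatever your setup code binds):
attribute vec3 aVertexPosition;
uniform mat4 uMVMatrix;  // model-view matrix
uniform mat4 uPMatrix;   // projection matrix
void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
}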

Draw textured quad in background of OpenGL scene

Code flow is as follows:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderScene();
renderTexturedQuadForBackground();
presentRenderbuffer();
Is there any way for me to get that textured quad rendering code to show behind the scene in spite of the fact that the scene renders first? Assume that I cannot change that the rendering of the background textured quad will happen directly before I present the render buffer.
Rephrased: I can't change the rendering order. Essentially what I want is that every pixel that would've been colored only by glClearColor is instead colored by this textured quad.
The easiest solution is to define the quad in normalized device coordinates directly and set the z-value to 1. You then don't need to project the quad and it will be screen-filling and behind anything else - except stuff that's also at z=1 after projection and perspective divide.
That's pretty much the standard procedure for screen-aligned quads, except there is usually no need to put the quad at z=1, not that it would matter. Usually, full-screen quads are simply used to be able to process at least one fragment per pixel, normally a 1:1 mapping of fragments and pixels. Deferred shading, post-processing FX, or image processing in general are the usual suspects. Since in most cases you only render the quad (and nothing else), the depth value is irrelevant, as long as it's inside the unit cube and not dropped by the depth test, for instance when you put it at z=1 and your depth function is LESS.
EDIT: I made a little mistake. NDCs are defined in a left-handed coordinate system, meaning that the near plane is mapped to -1 and the far plane is mapped to 1. So, you need to define your quad in NDCs with a z value of 1 and set the DepthFunc to LEQUAL. Alternatively, you can leave the depth function untouched and simply subtract a very small value from 1.f:
float maxZ = 1.f - std::numeric_limits<float>::epsilon();
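If you go with the first option instead, the depth-function change is a single state call (a sketch; assumes depth testing is enabled and the quad sits exactly at z = 1):
// let fragments at exactly the cleared/far depth value pass
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);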
EDIT2: Let's assume you want to render a screen-aligned quad which is drawn behind everything else and with appropriate texture coordinates. Please note: I'm on a desktop here, so I'm writing core GL code which doesn't map to GLES 2.0 directly. However, there is nothing in my example you can't do with GLES and GLSL ES 2.0.
You may define the vertex attribs of the quad like this (without messing with the depth func):
GLfloat maxZ = 1.f - std::numeric_limits<GLfloat>::epsilon ();
// interleaved positions and tex coords
GLfloat quad[] = {-1.f, -1.f, maxZ, 1.f, // v0
0.f, 0.f, 0.f, 0.f, // t0
1.f, -1.f, maxZ, 1.f, // ...
1.f, 0.f, 0.f, 0.f,
1.f, 1.f, maxZ, 1.f,
1.f, 1.f, 0.f, 0.f,
-1.f, 1.f, maxZ, 1.f,
0.f, 1.f, 0.f, 0.f};
GLubyte indices[] = {0, 1, 2, 0, 2, 3};
The VAO and buffers are set up accordingly:
// generate and bind a VAO
gl::GenVertexArrays (1, &vao);
gl::BindVertexArray (vao);
// setup our VBO
gl::GenBuffers (1, &vbo);
gl::BindBuffer (gl::ARRAY_BUFFER, vbo);
gl::BufferData (gl::ARRAY_BUFFER, sizeof(quad), quad, gl::STATIC_DRAW);
// set up our index buffer
gl::GenBuffers (1, &ibo);
gl::BindBuffer (gl::ELEMENT_ARRAY_BUFFER, ibo);
gl::BufferData (gl::ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, gl::STATIC_DRAW);
// setup our vertex arrays
gl::VertexAttribPointer (0, 4, gl::FLOAT, gl::FALSE_, 8 * sizeof(GLfloat), 0);
gl::VertexAttribPointer (1, 4, gl::FLOAT, gl::FALSE_, 8 * sizeof(GLfloat), (GLvoid*)(4 * sizeof(GLfloat)));
gl::EnableVertexAttribArray (0);
gl::EnableVertexAttribArray (1);
The shader code comes down to a very, very simple pass-through vertex shader and, for simplicity, a fragment shader which in my example simply outputs the interpolated tex coords:
// Vertex Shader
#version 430 core
layout (location = 0) in vec4 Position;
layout (location = 1) in vec4 TexCoord;
out vec2 vTexCoord;
void main()
{
vTexCoord = TexCoord.xy;
// you don't need to project, you're already in NDCs!
gl_Position = Position;
}
//Fragment Shader
#version 430 core
in vec2 vTexCoord;
out vec4 FragColor;
void main()
{
FragColor = vec4(vTexCoord, 0.0, 1.0);
}
As you can see, the values written to gl_Position are simply the vertex positions passed to the shader invocation. No projection takes place because the result of projection and perspective divide is nothing else than normalized device coordinates. Since we already are in NDCs, we don't need projection and perspective divide and so simply pass through the positions unaltered.
The final depth is very close to the maximum of the depth range, so the quad will appear to be behind anything else in your scene.
You can use the texcoords as usual.
I hope you get the idea. Except for the explicit attrib locations, which aren't supported by GLES 2.0 (i.e. replace them with glBindAttribLocation() calls instead), you shouldn't have to change anything.
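For example, a GLES 2.0-style replacement for the layout qualifiers might look like this (a sketch; program is whatever GLuint holds your shader program object):
// bind the attribute locations used by the shaders above, then link
glBindAttribLocation(program, 0, "Position");
glBindAttribLocation(program, 1, "TexCoord");
glLinkProgram(program);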
There is a way, but you have to put the quad behind the scene. If your quad is constructed correctly you can
enable DEPTH_TEST by using
glEnable(GL_DEPTH_TEST);
and then by using
glDepthFunc(GL_GREATER);
before rendering your background.
Your quad will be rendered behind the scene. But as I said, this only works when your quad is literally located behind the scene geometry.

Applying part of a texture (sprite sheet / texture map) to a point sprite in iOS OpenGL ES 2.0

It seems this should be easy but I'm having a lot of difficulty using part of a texture with a point sprite. I have googled around extensively and turned up various answers but none of these deal with the specific issue I'm having.
What I've learned so far:
Basics of point sprite drawing
How to deal with point sprites rendering as solid squares
How to alter orientation of a point sprite
How to use multiple textures with a point sprite, getting closer here..
That point sprites + sprite sheets has been done before, but is only possible in OpenGL ES 2.0 (not 1.0)
Here is a diagram of what I'm trying to achieve
Where I'm at:
I have a set of working point sprites all using the same single square image. Eg: a 16x16 image of a circle works great.
I have an Objective-C method which generates a 600x600 image containing a sprite-sheet with multiple images. I have verified this is working by applying the entire sprite sheet image to a quad drawn with GL_TRIANGLES.
I have used the above method successfully to draw parts of a sprite sheet onto quads. I just can't get it to work with point sprites.
Currently I'm generating texture coordinates pointing to the center of the sprite on the sprite sheet I'm targeting. Eg: Using the image at the bottom; star: 0.166,0.5; cloud: 0.5,0.5; heart: 0.833,0.5.
Code:
Vertex Shader
uniform mat4 Projection;
uniform mat4 Modelview;
uniform float PointSize;
attribute vec4 Position;
attribute vec2 TextureCoordIn;
varying vec2 TextureCoord;
void main(void)
{
gl_Position = Projection * Modelview * Position;
TextureCoord = TextureCoordIn;
gl_PointSize = PointSize;
}
Fragment Shader
varying mediump vec2 TextureCoord;
uniform sampler2D Sampler;
void main(void)
{
// Using my TextureCoord just draws a grey square, so
// I'm likely generating texture coords that texture2D doesn't like.
gl_FragColor = texture2D(Sampler, TextureCoord);
// Using gl_PointCoord just draws my whole sprite map
// gl_FragColor = texture2D(Sampler, gl_PointCoord);
}
What I'm stuck on:
I don't understand how to use the gl_PointCoord variable in the fragment shader. What does gl_PointCoord contain initially? Why? Where does it get its data?
I don't understand what texture coordinates to pass in. For example, how does the point sprite choose what part of my sprite sheet to use based on the texture coordinates? I'm used to drawing quads, which effectively have 4 sets of texture coordinates (one for each vertex); how is this different (clearly it is)?
A colleague of mine helped with the answer. It turns out the trick is to utilize both the size of the point (in OpenGL units) and the size of the sprite (in texture units, (0..1)) in combination with a little vector math to render only part of the sprite-sheet onto each point.
Vertex Shader
uniform mat4 Projection;
uniform mat4 Modelview;
// The radius of the point in OpenGL units, eg: "20.0"
uniform float PointSize;
// The size of the sprite being rendered. My sprites are square
// so I'm just passing in a float. For non-square sprites pass in
// the width and height as a vec2.
uniform float TextureCoordPointSize;
attribute vec4 Position;
attribute vec4 ObjectCenter;
// The top left corner of a given sprite in the sprite-sheet
attribute vec2 TextureCoordIn;
varying vec2 TextureCoord;
varying vec2 TextureSize;
void main(void)
{
gl_Position = Projection * Modelview * Position;
TextureCoord = TextureCoordIn;
TextureSize = vec2(TextureCoordPointSize, TextureCoordPointSize);
// This is optional, it is a quick and dirty way to make the points stay the same
// size on the screen regardless of distance.
gl_PointSize = PointSize / Position.w;
}
Fragment Shader
varying mediump vec2 TextureCoord;
varying mediump vec2 TextureSize;
uniform sampler2D Sampler;
void main(void)
{
// This is where the magic happens. Combine all three factors to render
// just a portion of the sprite-sheet for this point
mediump vec2 realTexCoord = TextureCoord + (gl_PointCoord * TextureSize);
mediump vec4 fragColor = texture2D(Sampler, realTexCoord);
// Optional, emulate GL_ALPHA_TEST to use transparent images with
// point sprites without worrying about z-order.
// see: http://stackoverflow.com/a/5985195/806988
if(fragColor.a == 0.0){
discard;
}
gl_FragColor = fragColor;
}
Point sprites are composed of a single position. Therefore any "varying" values will not actually vary, because there's nothing to interpolate between.
gl_PointCoord is a vec2 value where the XY values are between [0, 1]. They represent the location on the point. (0, 0) is the bottom-left of the point, and (1, 1) is the top-right.
So you want to map (0, 0) to the bottom-left of your sprite, and (1, 1) to the top-right. To do that, you need to know certain things: the size of the sprites (assuming they're all the same size), the size of the texture (because the texture fetch functions take normalized texture coordinates, not pixel locations), and which sprite is currently being rendered.
The latter can be set via a varying. It can just be a value that's passed as per-vertex data into the varying in the vertex shader.
You use that plus the size of the sprites to determine where in the texture you want to pull data for this sprite. Once you have the texel coordinates you want to use, you divide them by the texture size to produce normalized texture coordinates.
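As a concrete illustration of that division, here is how the per-point values fed to the shaders above could be derived, assuming the asker's 600x600 sheet holds 200x200 sprites (the sprite size and layout are assumptions):
GLfloat sheetSize  = 600.0f;
GLfloat spriteSize = 200.0f;
// normalized sprite size, i.e. the TextureCoordPointSize uniform above
GLfloat texSize = spriteSize / sheetSize;               // = 0.3333
// top-left corner of the sprite in column 1, row 1 (0-based),
// i.e. the TextureCoordIn attribute for the middle sprite
GLfloat texCoordS = (1.0f * spriteSize) / sheetSize;    // = 0.3333
GLfloat texCoordT = (1.0f * spriteSize) / sheetSize;    // = 0.3333
// the fragment shader then samples at TextureCoordIn + gl_PointCoord * TextureSize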
In any case, point sprites, despite the name, aren't really meant for sprite rendering. It would be easier to use quads/triangles for that, as you have more control over exactly what positions everything has.

GLSL Shaders compile but don't draw anything on Windows

I'm trying to port some OpenGL rendering code I wrote for iOS to a Windows app. The code runs fine on iOS, but on Windows it doesn't draw anything. I've narrowed the problem down to this bit of code, as fixed-function stuff (such as glutSolidTorus) draws fine, but when shaders are enabled, nothing works.
Here's the rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
// Set the vertex buffer as current
this->vertexBuffer->MakeActive();
// Get a reference to the vertex description to save copying
const AT::Model::VertexDescription & vd = this->vertexBuffer->GetVertexDescription();
std::vector<GLuint> handles;
// Loop over the vertex descriptions
for (int i = 0, stride = 0; i < vd.size(); ++i)
{
// Get a handle to the vertex attribute on the shader object using the name of the current vertex description
GLint handle = shader.GetAttributeHandle(vd[i].first);
// If the handle is not an OpenGL 'Does not exist' handle
if (handle != -1)
{
glEnableVertexAttribArray(handle);
handles.push_back(handle);
// Set the pointer to the vertex attribute, with the vertex's element count,
// the size of a single vertex and the start position of the first attribute in the array
glVertexAttribPointer(handle, vd[i].second, GL_FLOAT, GL_FALSE,
sizeof(GLfloat) * (this->vertexBuffer->GetSingleVertexLength()),
(GLvoid *)stride);
}
// Add to the stride value with the size of the number of floats the vertex attr uses
stride += sizeof(GLfloat) * (vd[i].second);
}
// Draw the indexed elements using the current vertex buffer
glDrawElements(GL_TRIANGLES,
this->vertexBuffer->GetIndexArrayLength(),
GL_UNSIGNED_SHORT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);
// Disable the vertexattributearrays
for (int i = 0, stride = 0; i < handles.size(); ++i)
{
glDisableVertexAttribArray(handles[i]);
}
It's inside a function that takes a shader as a parameter, and the vertex description is a list of pairs: attribute handles to number of elements. Uniforms are being set outside this function. I'm enabling the shader for use before it's passed in to the function. Here are the two shader sources:
Vertex:
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
// Uniforms
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
/// OUTPUTS
varying vec2 o_texCoords;
varying vec3 o_normals;
// Vertex Shader
void main()
{
// Do the normal position transform
gl_Position = Projection * View * Model * vec4(position, 1.0);
// Transform the normals to world space
o_normals = NormalMatrix * normal;
// Pass texture coords on for interpolation
o_texCoords = texCoord;
}
Fragment:
varying vec2 o_texCoords;
varying vec3 o_normals;
/// Fragment Shader
void main()
{
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
I'm running OpenGL 2.1 with Shader language 1.2. I'd be most appreciative for any help anyone can give me.
I see that you are assigning black as the output color for the fragment in your fragment shader. Try changing that to something like
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
and see if the objects in the scene are colored green.
I came back to this recently and it seems that I wasn't checking for errors during rendering; it was giving me a 1285 error (GL_OUT_OF_MEMORY) after calling glDrawElements(). This led me to check the vertex buffer objects to see if they contained any data, and it turned out I wasn't properly deep copying them in a wrapper class, so they were being deleted before any rendering happened. Fixing this sorted the issue.
Thank you for your suggestions.
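For reference, the kind of error check described above can be as simple as this (a sketch, not the poster's actual code):
// call right after a suspect GL call, e.g. glDrawElements()
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
{
    // 1285 (0x0505) is GL_OUT_OF_MEMORY
    printf("GL error: 0x%04X\n", err);
}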
