Will WebGL ever render points as circles? - webgl

On desktop OpenGL, points will sometimes be rendered as circles (if you have set gl_PointSize in the vertex shader). I am tinkering with WebGL and it seems to consistently render points as squares (when gl_PointSize is set). Is there a way to get them to render as circles?

Yes, there is a solution. You can do that using point sprites. Just send a texture to the shader and use alpha blending to cut off the unnecessary part of the sprite.
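If you'd rather skip the texture entirely, a common variation is to read gl_PointCoord in the fragment shader (it runs from (0,0) to (1,1) across each point primitive) and discard everything outside the circle. A minimal sketch, assuming gl_PointSize is set in the vertex shader and u_color is a uniform you supply:

precision mediump float;
uniform vec4 u_color; // assumed uniform holding the point color

void main() {
    // Vector from the center of the point sprite; gl_PointCoord spans [0,1] x [0,1].
    vec2 fromCenter = gl_PointCoord - vec2(0.5);
    // Discard fragments outside the inscribed circle (radius 0.5, so radius^2 = 0.25).
    if (dot(fromCenter, fromCenter) > 0.25) {
        discard;
    }
    gl_FragColor = u_color;
}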
Normally (in desktop OpenGL) you see points rendered as circles when MSAA and the POINT_SMOOTH feature are enabled.
Below are links where you can find all the information you need :)
OpenGL ES 2.0 Equivalent for ES 1.0 Circles Using GL_POINT_SMOOTH?
http://klazuka.tumblr.com/post/249698151/point-sprites-and-opengl-es-2-0

Related

How can I make my WebGL Coordinate System "Top Left" Oriented?

For computational efficiency, I use a fragment shader to implement a simple 2D metaballs algorithm. The data of the circles to render is top-left oriented.
I have everything working, except that the origin of WebGL's coordinate system (bottom-left) is giving me a hard time: obviously, the rendered output is mirrored along the horizontal axis.
Following https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html (and others), I tried to rotate things using a vertex shader, unfortunately without success.
What is the most simple way of achieving the reorientation of WebGL's coordinate system?
I'd appreciate any hints and pointers, thanks! :)
Please find a working (not working ;) ) example here:
https://codesandbox.io/s/gracious-fermat-znbsw?file=/src/index.js
Since you are using gl_FragCoord in your pixel shader, you can't do it from the vertex shader, because gl_FragCoord gives the canvas coordinates, but upside down. You could easily invert it in JavaScript in your pass-through to WebGL:
gl.uniform3fv(gl.getUniformLocation(program, `u_circles[${i}]`), [
  circles[i].x,
  canvas.height - circles[i].y - 1,
  circles[i].r
]);
If you want to do it in the shader and keep using gl_FragCoord, then you should pass the height of the canvas to the shader as a uniform and do the conversion of y there, with something like
vec2 screenSpace = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1);
The -1 is because the coordinates start at 0.
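Putting that together, a fragment-shader sketch might look like this (u_canvasHeight is an assumed uniform name, set from JavaScript with gl.uniform1f):

precision mediump float;
// Assumed uniform holding canvas.height, e.g.
// gl.uniform1f(gl.getUniformLocation(program, "u_canvasHeight"), canvas.height);
uniform float u_canvasHeight;

void main() {
    // Flip y so screenSpace is top-left oriented, matching the circle data.
    vec2 screenSpace = vec2(gl_FragCoord.x, u_canvasHeight - gl_FragCoord.y - 1.0);
    // Placeholder visualization; the real shader evaluates the metaballs field at screenSpace.
    gl_FragColor = vec4(screenSpace / u_canvasHeight, 0.0, 1.0);
}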

Semi-transparency in OpenGL ES 2.0

I'm running into a problem with semi-transparency with OpenGL ES 2.0 on iOS. My scene is rather simple: it consists of a grid of cubes, some of which should appear solid, whereas the others should be rendered semi-transparent. I started out with the code below for setting up OpenGL.
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This renders incorrect transparency for some angles because of the depth testing and culling. See the two images below.
I tried disabling culling and depth testing and enabling alpha testing instead. The result is correct transparency, but no textures (see image below).
//glEnable(GL_CULL_FACE);
//glEnable(GL_DEPTH_TEST);
//glDepthFunc(GL_LEQUAL);
glAlphaFunc(GL_GREATER, 0.5);
glEnable(GL_ALPHA_TEST);
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I'm using GLKit to load textures and a GLKBaseEffect to render the scene. Does anyone have a hint how to achieve the same result as in the first image, with correct transparency for all perspectives? Thank you :)
Your two main options are:
Sort all the polygons in your scene back to front, and make sure no polygon intersects any other (because then you can't order them).
Use a sort-independent blending mode instead, such as an additive or subtractive blend.
If you really do just want a grid of cubes, changing the rendering order to be suitable for any viewpoint shouldn't be too tricky, as you just need to traverse the cubes in a different order rather than actually sort anything.

How do you add light with multiple passes in OpenGL?

I have two functions that I want to combine the results of:
drawAmbient
drawDirectional
They each work fine individually, drawing the scene with the ambient light only, or the directional light only. I want to show both the ambient and directional light but am having a bit of trouble. I try this:
[self drawAmbient];
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
[self drawDirectional];
glDisable(GL_BLEND);
but I only see the results of the first draw. I calculate the depth in the same way for both sets of draw calls. I could always just render to texture and blend the textures, but that seems redundant. Is there a way that I can add the lighting together when rendering to the default framebuffer?
You say you calculate the depth the same way in both passes. This is of course correct, but as the default depth comparison function is GL_LESS, nothing will actually be rendered in the second pass, since the depth is never less than what is currently in the depth buffer.
So for the second pass just change the depth test to
glDepthFunc(GL_EQUAL);
and then back to
glDepthFunc(GL_LESS);
Or you may also set it to GL_LEQUAL for the whole runtime to cover both cases.
As far as I know, you should render the lighting to separate render targets and then combine them. So you will have the scene rendered into these targets:
textured, without lighting
accumulated diffuse lighting (fill with the ambient color and additively render all light sources)
accumulated specular lighting (if you use a specular component)
Then combine the textures, so final_color = textured * diffuse + specular.
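A sketch of that combine pass in GLSL ES, assuming the three targets were rendered to textures and bound to the sampler uniforms below (all names are illustrative, not from the original setup):

precision mediump float;
varying vec2 v_texCoord;        // full-screen quad texture coordinate
uniform sampler2D u_textured;   // scene color, no lighting
uniform sampler2D u_diffuse;    // accumulated ambient + diffuse lighting
uniform sampler2D u_specular;   // accumulated specular lighting

void main() {
    vec4 textured = texture2D(u_textured, v_texCoord);
    vec4 diffuse  = texture2D(u_diffuse,  v_texCoord);
    vec4 specular = texture2D(u_specular, v_texCoord);
    gl_FragColor = textured * diffuse + specular; // final_color = textured * diffuse + specular
}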

"warping" an image on iOS

I'm trying to find a way to do something similar to this on iOS:
Does anyone know a simple way to do it?
I don't know of a one-liner to do this, but you can use OpenGL to render a textured grid of quads with the texture coordinates equally distributed.
Example of a 2x2 grid:
{0.0,1.0} {0.5,1.0} {1.0,1.0}
{0.0,0.5} {0.5,0.5} {1.0,0.5}
{0.0,0.0} {0.5,0.0} {1.0,0.0}
If you move shared vertices of adjacent quads (like in your example) while the texture coords stay the same, you get a warp effect. You need a trivial vertex and fragment shader when using OpenGL ES, especially if you want to smooth the warp effect, which is linearly interpolated per quad/triangle in its simple form.
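The vertex shader really can be trivial; here is a minimal sketch, assuming the (possibly moved) positions and the fixed, evenly spaced texture coordinates are passed as the attributes a_position and a_texCoord. The matching fragment shader is just a plain texture2D lookup of the interpolated coordinate.

attribute vec2 a_position;  // warped vertex position in clip space
attribute vec2 a_texCoord;  // original, evenly spaced texture coordinate
varying vec2 v_texCoord;

void main() {
    v_texCoord = a_texCoord;                  // texture coords stay on the regular grid
    gl_Position = vec4(a_position, 0.0, 1.0); // only the positions are warped
}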

GPGPU programming with OpenGL ES 2.0

I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1.
I am able to write the GLSL fragment shader for processing the pixels as I've been trying out different things in an effect designer app.
I am not sure, however, how I should do the other part of the task. That is, I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable.
As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the pixel shader is applied once to each pixel of the image, and so that it renders to a texture or something. But how can I achieve that, considering there's depth that may make things somewhat awkward for me...
I'd be very grateful if anyone could help me with this rather simple task as I am really frustrated with myself.
UPDATE:
It seems I'll have to use an FBO, getting one like this: glBindFramebuffer(...)
Use this tutorial; it targets desktop OpenGL 2.0, but most of the features are available in ES 2.0. The only thing I have doubts about is floating-point textures.
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
Basically, you need 4 vertex positions (as vec2) of a quad (with corners (-1,-1) and (1,1)) passed as a vertex attribute.
You don't really need a projection, because the shader will not need any.
Create an FBO, bind it and attach the target surface. Don't forget to check the completeness status.
Bind the shader, set up input textures and draw the quad.
Your vertex shader may look like this:
#version 130
in vec2 at_pos;
out vec2 tc;

void main() {
    tc = (at_pos + vec2(1.0)) * 0.5; // texture coordinates
    gl_Position = vec4(at_pos, 0.0, 1.0); // no projection needed
}
And a fragment shader:
#version 130
in vec2 tc;
uniform sampler2D unit_in;

void main() {
    vec4 v = texture2D(unit_in, tc);
    gl_FragColor = do_something(v); // placeholder for your per-pixel processing
}
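As a concrete illustration, do_something could be something as simple as a brightness scale; the u_brightness uniform below is an assumption, not part of the original answer:

#version 130
in vec2 tc;
uniform sampler2D unit_in;
uniform float u_brightness; // e.g. 1.2 to brighten by 20%

void main() {
    vec4 v = texture2D(unit_in, tc);
    gl_FragColor = vec4(v.rgb * u_brightness, v.a); // scale RGB, keep alpha
}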
If you want an example, I created this project for iOS devices for processing frames of video grabbed from the camera using OpenGL ES 2.0 shaders. I explain more about it in my writeup here.
Basically, I pull in the BGRA data for a frame and create a texture from that. I then use two triangles to generate a rectangle and map the texture onto that. A shader is used to directly display the image onscreen, perform some effect on the image and display it, or perform some effect on the image while in an offscreen FBO. In the last case, I can then use glReadPixels() to pull the image back for some CPU-based processing, but ideally I want to fix this so that the processed image is just passed on as a texture to the next set of shaders.
You should also check out ogles_gpgpu, which even supports Android systems. An overview about this topic is given in this publication: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You can do more advanced GPGPU things with OpenGL ES 3.0 now. Check out this post for example. Apple now also has the Metal API, which allows even more GPU compute operations. Both OpenGL ES 3.x and Metal are only supported by newer devices with an A7 chip.
